suicide prevention
Rethinking Suicidal Ideation Detection: A Trustworthy Annotation Framework and Cross-Lingual Model Evaluation
Dzafic, Amina, Kavut, Merve, Bayram, Ulya
Suicidal ideation detection is critical for real-time suicide prevention, yet its progress faces two under-explored challenges: limited language coverage and unreliable annotation practices. Most available datasets are in English, but even among these, high-quality, human-annotated data remains scarce. As a result, many studies rely on available pre-labeled datasets without examining their annotation process or label reliability. The lack of datasets in other languages further limits the global realization of suicide prevention via artificial intelligence (AI). In this study, we address one of these gaps by constructing a novel Turkish suicidal ideation corpus derived from social media posts and introducing a resource-efficient annotation framework involving three human annotators and two large language models (LLMs). We then address the remaining gaps by performing a bidirectional evaluation of label reliability and model consistency across this dataset and three popular English suicidal ideation detection datasets, using transfer learning through eight pre-trained sentiment and emotion classifiers. These transformers help assess annotation consistency and benchmark model performance against manually labeled data. Our findings underscore the need for more rigorous, language-inclusive approaches to annotation and evaluation in mental health natural language processing (NLP) while demonstrating the questionable performance of popular models with zero-shot transfer learning. We advocate for transparency in model training and dataset construction in mental health NLP, prioritizing data and model reliability.
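The annotation framework above combines three human annotators with two LLM labelers and evaluates label reliability. As an illustration only (a minimal sketch, not the authors' code), one common way to check such a setup is to derive majority-vote labels and score each annotator against them with Cohen's kappa; all names and label values below are hypothetical:

```python
from collections import Counter

def majority_label(votes):
    """Return the most common label among annotator votes."""
    return Counter(votes).most_common(1)[0][0]

def cohens_kappa(a, b):
    """Chance-corrected agreement between two equal-length label sequences."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement if both raters labeled independently at their own rates.
    pa, pb = Counter(a), Counter(b)
    expected = sum(pa[l] / n * pb[l] / n for l in set(a) | set(b))
    return (observed - expected) / (1 - expected) if expected < 1 else 1.0

# Hypothetical labels from 3 humans + 2 LLMs over 6 posts (1 = suicidal ideation).
annotators = {
    "h1":   [1, 0, 1, 1, 0, 0],
    "h2":   [1, 0, 1, 0, 0, 0],
    "h3":   [1, 0, 1, 1, 0, 1],
    "llm1": [1, 0, 0, 1, 0, 0],
    "llm2": [1, 1, 1, 1, 0, 0],
}
gold = [majority_label(v) for v in zip(*annotators.values())]
for name, labels in annotators.items():
    print(name, round(cohens_kappa(labels, gold), 2))
```

Scoring each rater against the majority vote flags both unreliable annotators and systematically divergent LLM labelers in one pass.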
Can Large Language Models Identify Implicit Suicidal Ideation? An Empirical Evaluation
Li, Tong, Yang, Shu, Wu, Junchao, Wei, Jiyao, Hu, Lijie, Li, Mengdi, Wong, Derek F., Oltmanns, Joshua R., Wang, Di
We present a comprehensive evaluation framework for assessing Large Language Models' (LLMs) capabilities in suicide prevention, focusing on two critical aspects: the Identification of Implicit Suicidal ideation (IIS) and the Provision of Appropriate Supportive responses (PAS). We introduce a novel dataset of 1,308 test cases built upon psychological frameworks including D/S-IAT and Negative Automatic Thinking, alongside real-world scenarios. Through extensive experiments with 8 widely used LLMs under different contextual settings, we find that current models struggle significantly with detecting implicit suicidal ideation and providing appropriate support, highlighting crucial limitations in applying LLMs to mental health contexts. Our findings underscore the need for more sophisticated approaches in developing and evaluating LLMs for sensitive psychological applications.
- North America > United States > Washington > King County > Seattle (0.04)
- Asia > Middle East > Jordan (0.04)
- Asia > Macao (0.04)
Lexicography Saves Lives (LSL): Automatically Translating Suicide-Related Language
Schoene, Annika Marie, Ortega, John E., Zevallos, Rodolfo Joel, Ihle, Laura Haaber
Recent years have seen a marked increase in research that aims to identify or predict risk, intention or ideation of suicide. The majority of new tasks, datasets, language models and other resources focus on English and on suicide in the context of Western culture. However, suicide is a global issue, and reducing the suicide rate by 2030 is one of the key goals of the UN's Sustainable Development Goals. Previous work has translated English dictionaries related to suicide into different target languages due to a lack of other available resources. Naturally, this leads to a variety of ethical tensions (e.g., linguistic misrepresentation) where discourse around suicide is not present in a particular culture or country. In this work, we introduce the 'Lexicography Saves Lives Project' to address this issue and make three distinct contributions. First, we outline ethical considerations and provide overview guidelines to mitigate harm in developing suicide-related resources. Next, we translate an existing dictionary related to suicidal ideation into 200 different languages and conduct human evaluations on a subset of the translated dictionaries. Finally, we introduce a public website to make our resources available and enable community participation.
- North America > The Bahamas (0.14)
- Asia > Myanmar (0.04)
- North America > United States > Virginia (0.04)
- (5 more...)
Towards Suicide Prevention from Bipolar Disorder with Temporal Symptom-Aware Multitask Learning
Lee, Daeun, Son, Sejung, Jeon, Hyolim, Kim, Seungbae, Han, Jinyoung
Bipolar disorder (BD) is closely associated with an increased risk of suicide. However, while prior work has revealed valuable insights into the behavior of BD patients on social media, little attention has been paid to developing a model that can predict the future suicidality of a BD patient. Therefore, this study proposes a multi-task learning model for predicting the future suicidality of BD patients by jointly learning current symptoms. We build a novel BD dataset clinically validated by psychiatrists, including 14 years of posts on bipolar-related subreddits written by 818 BD patients, along with annotations of future suicidality and BD symptoms. We also suggest a temporal symptom-aware attention mechanism to determine which symptoms are the most influential for predicting future suicidality over time through a sequence of BD posts. Our experiments demonstrate that the proposed model outperforms state-of-the-art models in both BD symptom identification and future suicidality prediction tasks. In addition, the proposed temporal symptom-aware attention provides interpretable attention weights, helping clinicians to understand BD patients more comprehensively and to provide timely intervention by tracking mental state progression.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > California > Los Angeles County > Long Beach (0.05)
- Asia > South Korea > Seoul > Seoul (0.04)
- (4 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
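The temporal symptom-aware attention described in the abstract above can be illustrated with a toy sketch (my own simplification, not the authors' model): each post in a user's timeline carries a symptom-score vector, a dot product against a symptom query scores each post's relevance, and a softmax turns the scores into interpretable per-post weights. The symptom names and numbers below are invented for illustration:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(post_vectors, query):
    """Weight each post by dot-product relevance to a symptom query,
    then pool the timeline into one context vector."""
    scores = [sum(p * q for p, q in zip(vec, query)) for vec in post_vectors]
    weights = softmax(scores)
    dim = len(query)
    context = [sum(w * vec[d] for w, vec in zip(weights, post_vectors))
               for d in range(dim)]
    return weights, context

# Hypothetical symptom features per post: [depressed_mood, manic, anxiety]
timeline = [
    [0.9, 0.1, 0.3],  # older post, strong depressive signal
    [0.2, 0.8, 0.1],  # manic episode
    [0.7, 0.2, 0.9],  # recent post, mixed depressive/anxious signal
]
query = [1.0, 0.0, 0.5]  # attend to depression-related symptoms
weights, context = attend(timeline, query)
print([round(w, 2) for w in weights])
```

Because the weights sum to one and attach to individual posts, a clinician can read them directly as "which posts in the history drove this prediction," which is the interpretability property the abstract highlights.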
AI tech identifies suicide risk in military veterans before it's too late: 'Flipping the model'
U.S. Marine Corps veteran Adam Cooper is joined by Army veteran Lowell Koppert as he nears the end of his 22-hour workout and shares his 'radical' pledge to bring more awareness to the issue of veteran suicides. If you or someone you know is having thoughts of suicide, please contact the Suicide & Crisis Lifeline at 988 or 1-800-273-TALK (8255). As the mental health of U.S. military veterans remains a major concern among many people in our society, new technology could become a lifesaver. An AI platform developed by ClearForce, a tech company in Vienna, Virginia, aims to identify the risk of suicide among veterans before it's too late. Col. Michael Hudson, vice president at ClearForce, spoke to Fox News Digital in an interview to discuss his efforts on the veteran suicide initiative.
- North America > United States > Virginia > Fairfax County > Vienna (0.25)
- North America > United States > California (0.05)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
Artificial intelligence may improve suicide prevention in the future
The loss of any life can be devastating, but the loss of a life from suicide is especially tragic. Around nine Australians take their own life each day, and it is the leading cause of death for Australians aged 15–44. Suicide attempts are more common, with some estimates stating that they occur up to 30 times as often as deaths. "Suicide has large effects when it happens. It impacts many people and has far-reaching consequences for family, friends and communities," says Karen Kusuma, a UNSW Sydney PhD candidate in psychiatry at the Black Dog Institute, who investigates suicide prevention in adolescents.
Detecting Suicide Risk in Online Counseling Services: A Study in a Low-Resource Language
Bialer, Amir, Izmaylov, Daniel, Segal, Avi, Tsur, Oren, Levi-Belz, Yossi, Gal, Kobi
With the increased awareness of situations of mental crisis and their societal impact, online services providing emergency support are becoming commonplace in many countries. Computational models, trained on discussions between help-seekers and providers, can support suicide prevention by identifying at-risk individuals. However, the lack of domain-specific models, especially in low-resource languages, poses a significant challenge for the automatic detection of suicide risk. We propose a model that combines pre-trained language models (PLMs) with a fixed set of manually crafted (and clinically approved) suicidal cues, followed by a two-stage fine-tuning process. Our model achieves 0.91 ROC-AUC and an F2-score of 0.55, significantly outperforming an array of strong baselines even early in the conversation, which is critical for real-time detection in the field. Moreover, the model performs well across genders and age groups.
- North America > United States (0.04)
- Asia > Middle East > Israel (0.04)
- Asia > China > Hong Kong (0.04)
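The F2-score reported in the abstract above weights recall more heavily than precision (beta = 2), which suits risk detection where a missed at-risk individual is costlier than a false alarm. A quick sketch of the standard F-beta definition (not tied to the paper's code; the confusion counts below are hypothetical):

```python
def fbeta(tp, fp, fn, beta=2.0):
    """F-beta score from confusion counts; beta > 1 favors recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical counts: 40 true positives, 20 false positives, 30 missed cases.
print(round(fbeta(40, 20, 30), 3))
```

With beta = 2, recall errors are penalized roughly four times as heavily as precision errors, so a detector can report a respectable F2 even with modest precision, provided it catches most at-risk conversations.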
How suicide prevention is getting a boost from artificial intelligence: Exclusive
Suicide prevention is getting a boost from artificial intelligence. The Trevor Project, the world's largest suicide prevention and crisis intervention organization for LGBTQ youth, has launched a "Crisis Contact Simulator" to help train counselors and prepare them to support youth in crisis. Developed in collaboration with Google, the first-of-its-kind technology is an AI-powered counselor training tool that simulates digital conversations and allows trainees to practice realistic conversations with youth personas. "Riley," the organization's first Crisis Contact Simulator persona, emulates messages from a teen in North Carolina who feels anxious and depressed. In addition to Riley, the organization is currently developing a variety of personas that represent a wide range of life situations, backgrounds, sexual orientations, gender identities and risk levels.
- North America > United States > North Carolina (0.25)
- North America > United States > New York (0.05)
- North America > United States > California > Los Angeles County > Los Angeles (0.05)
How Cyber Safety Artificial Intelligence Helps Students In K-12 Technology - Security Boulevard
Artificial Intelligence is taking K-12 cyber safety monitoring to the next level. Issues with cyber safety in schools are on the rise. Gaggle recently reported a 66% jump in the number of cyber safety incidents in the first three months of the 2020-21 school year compared to the same period in the 2019-20 school year.
- Education (1.00)
- Health & Medicine > Consumer Health (0.39)
- Information Technology > Security & Privacy (0.37)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (0.32)